Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds
Authors
Abstract
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors, as opposed to the one-hot vectors standardly used in deep learning. We show that the combined approach performs better on a word relatedness judgment task.
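To make the contrast concrete, the following is a minimal NumPy sketch of the two input representations: a one-hot lookup versus a projection of a distributional-model vector. The toy corpus, the window size of 1, and the random (untrained) projection matrices are illustrative assumptions; the paper's actual model, training objective, and data are not reproduced here.

import numpy as np

# Toy corpus and vocabulary (hypothetical data, for illustration only).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# High-dimensional distributional model: each word is represented by its
# co-occurrence counts with every context word (symmetric window of 1).
C = np.zeros((V, V))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            C[idx[w], idx[corpus[j]]] += 1.0
C /= C.sum(axis=1, keepdims=True)  # row-normalize to context distributions

d = 4  # low dimensionality of the learned embeddings
rng = np.random.default_rng(0)

# Standard deep-learning setup: the input is a one-hot vector, so the
# embedding of a word is simply the corresponding row of the weight matrix.
W_onehot = rng.normal(scale=0.1, size=(V, d))
emb_onehot = W_onehot[idx["cat"]]

# Combined setup described in the abstract: the input is the word's
# distributional vector, and its embedding is a learned projection of it
# (the projection here is random; in practice it would be trained).
W_dist = rng.normal(scale=0.1, size=(V, d))
emb_dist = C[idx["cat"]] @ W_dist

print(emb_onehot.shape, emb_dist.shape)  # both (4,): low-dimensional embeddings

Note that the one-hot setup is recovered as the special case where the distributional matrix C is the identity, so the distributional input strictly generalizes the standard one.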
Similar resources
Learning Word Embeddings for Hyponymy with Entailment-Based Distributional Semantics
Lexical entailment, such as hyponymy, is a fundamental issue in the semantics of natural language. This paper proposes distributional semantic models which efficiently learn word embeddings for entailment, using a recently proposed framework for modelling entailment in a vector space. These models postulate a latent vector for a pseudo-phrase containing two neighbouring word vectors. We investig...
Combining pattern recognition and deep-learning-based algorithms to automatically detect commercial quadcopters using audio signals (Research Article)
Commercial quadcopters, with many private, commercial, and public-sector applications, are a rapidly advancing technology. Currently, there is no guarantee of the safe operation of these devices in the community. This paper presents three different methods for automatically identifying commercial quadcopters. Among these three techniques, two are based on deep neural networks in whi...
Pairwise Inner Product Distance: Metric for Functionality, Stability, Dimensionality of Vector Embedding
In this paper, we present a theoretical framework for understanding vector embedding, a fundamental building block of many deep learning models, especially in NLP. We discover a natural unitary-invariance in vector embeddings, which is required by the distributional hypothesis. This unitary-invariance states that two embeddings are essentially equivalent if one can be obtained from the... (see the numerical sketch after this list)
Leveraging Lexical Resources for Learning Entity Embeddings in Multi-Relational Data
Recent work in learning vector-space embeddings for multi-relational data has focused on combining relational information derived from knowledge bases with distributional information derived from large text corpora. We propose a simple approach that leverages the descriptions of entities or phrases available in lexical resources, in conjunction with distributional semantics, in order to derive ...
Efficient Method Based on Combination of Deep Learning Models for Sentiment Analysis of Text
People's opinions about a specific concept are considered one of the most important kinds of textual data available on the web. However, finding and monitoring the web pages that contain these comments and extracting valuable information from them is very difficult. In this regard, developing automatic sentiment analysis systems that can extract opinions and express their intellectual process has ...
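As an aside on the Pairwise Inner Product Distance entry above: assuming the invariance in question is that pairwise inner products between embeddings are preserved under a shared orthogonal (unitary) transformation, the following minimal NumPy check illustrates it. The dimensions and variable names here are arbitrary assumptions, not taken from that paper.

import numpy as np

# Minimal check (illustrative assumption): pairwise inner products between
# embeddings do not change when every embedding is rotated by one shared
# orthogonal matrix Q, since (E Q)(E Q)^T = E Q Q^T E^T = E E^T.
rng = np.random.default_rng(1)
E = rng.normal(size=(10, 5))                  # 10 embeddings of dimension 5
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal matrix

gram_before = E @ E.T              # all pairwise inner products
gram_after = (E @ Q) @ (E @ Q).T   # recomputed after the shared rotation

print(np.allclose(gram_before, gram_after))  # True: the rotation cancels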
Journal: CoRR
Volume: abs/1312.5559
Issue: -
Pages: -
Publication date: 2013